Kernel methods are learning algorithms that enjoy solid theoretical foundations but suffer from significant computational limitations. Sketching, which consists in looking for solutions in a subspace of reduced dimension, is a well-studied approach to alleviate this numerical burden. However, fast sketching strategies, such as non-adaptive subsampling, significantly weaken the guarantees of the algorithms, while theoretically accurate sketches, such as Gaussian sketches, remain relatively slow in practice. In this paper, we introduce $p$-sparsified sketches, which combine the benefits of both approaches to achieve a good trade-off between statistical accuracy and computational efficiency. To support our method, we derive excess risk bounds for both single- and multiple-output problems with generic Lipschitz losses, providing new guarantees for a wide range of applications, from robust regression to multiple quantile regression. We also provide empirical evidence of the superiority of our sketches over recent SOTA approaches.
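To make the construction concrete, here is a minimal numerical sketch (not the paper's exact estimator): a sparsified Rademacher sketch matrix whose entries are kept with probability $p$ and rescaled, plugged into a sketched kernel ridge regression on synthetic data. The kernel, regularisation, and dimensions are illustrative assumptions.

```python
import numpy as np

def p_sparsified_sketch(s, n, p, rng=None):
    """s x n sketch: Rademacher entries kept with probability p, scaled so E[S^T S] = I."""
    rng = np.random.default_rng(rng)
    signs = rng.choice([-1.0, 1.0], size=(s, n))   # dense Rademacher part
    mask = rng.random((s, n)) < p                  # sparsification: keep each entry w.p. p
    return signs * mask / np.sqrt(p * s)

# Sketched kernel ridge regression on synthetic data: solve in the s-dimensional
# subspace defined by S, i.e. predictions f = K S^T alpha with alpha in R^s.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = X[:, 0] + 0.1 * rng.normal(size=500)
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq_dists)                        # Gaussian kernel Gram matrix

S = p_sparsified_sketch(s=50, n=500, p=0.1, rng=1)
SK = S @ K
lam = 1e-2
alpha = np.linalg.solve(SK @ K @ S.T + lam * SK @ S.T, SK @ y)
preds = K @ S.T @ alpha
print("train MSE:", np.mean((preds - y) ** 2))
```

Smaller $p$ gives a sparser, cheaper-to-apply sketch, while $p = 1$ recovers a dense Rademacher sketch.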
Fake news has existed for as long as there has been news, from rumors to print media and later broadcast radio and television. Recently, the information age, with its communications and Internet breakthroughs, has exacerbated the spread of fake news. Moreover, beyond e-commerce, the current Internet economy depends on advertisements, views, and clicks, which prompts many developers to bait end users into clicking links or ads. Consequently, the wild spread of fake news through social media networks has affected real-world issues, from elections to the adoption of 5G and the handling of the COVID-19 pandemic. Efforts to detect and thwart fake news, from fact checkers to artificial-intelligence-based detectors, have existed since fake news first appeared, and solutions continue to evolve as fake news spreaders adopt more sophisticated techniques. In this paper, R code is used to study and visualize a modern fake news dataset. We use clustering, classification, correlation, and various plots to analyze and present the data. The experiments show that the classifiers achieve high efficiency in separating real news from fake news.
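As a rough Python analogue of the classification step described above (the paper itself works in R, and the dataset file, column names, and model below are placeholders rather than the authors' setup), a bag-of-words classifier can separate real from fake articles:

```python
# Hypothetical Python analogue of the classification analysis; the paper uses R and a
# specific fake-news dataset, neither of which is reproduced here.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("fake_news.csv")      # assumed columns: "text", "label" (0 = real, 1 = fake)
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"])

vectorizer = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2), stop_words="english")
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)
print(classification_report(y_test, clf.predict(vectorizer.transform(X_test))))
```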
Imaging biomarkers offer a non-invasive way to predict the response to immunotherapy prior to treatment. In this work, we propose a novel type of deep radiomic features (DRFs) computed from a convolutional neural network (CNN) that capture tumor characteristics related to immune cell markers and overall survival. Our study uses four MRI sequences (T1-weighted, T1-weighted post-contrast, T2-weighted, and FLAIR) with corresponding immune cell markers of 151 patients with brain tumors. The proposed method extracts a total of 180 DRFs by aggregating the activation maps of a pre-trained 3D-CNN within labeled tumor regions of the MRI scans. These features provide a compact yet powerful representation of regional texture that encodes tissue heterogeneity. A comprehensive set of experiments is performed to assess the relationship between the proposed DRFs and the immune cell markers, and to measure their association with overall survival. Results show a high correlation between DRFs and various markers, as well as significant differences between patients grouped according to these markers. Moreover, combining DRFs, clinical features, and immune cell markers as input to a random forest classifier helps discriminate between short and long survival outcomes, with an AUC of 72\% and $p = 2.36 \times 10^{-5}$. These results demonstrate the usefulness of the proposed DRFs as non-invasive biomarkers for predicting treatment response in patients with brain tumors.
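A minimal sketch of the feature-extraction idea, assuming a placeholder 3D-CNN and synthetic volumes in place of the paper's pre-trained network, MRI sequences, and tumor segmentations:

```python
# Illustrative only: a random, untrained 3D-CNN and synthetic data stand in for the
# paper's pre-trained network, MRI scans, clinical features, and survival labels.
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

backbone = nn.Sequential(                      # stand-in for a pre-trained 3D-CNN
    nn.Conv3d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
)

def deep_radiomic_features(volume, tumor_mask):
    """Aggregate CNN activation maps inside the labeled tumor region into one vector."""
    with torch.no_grad():
        acts = backbone(volume.unsqueeze(0))[0]              # (C, D, H, W) activation maps
    vals = acts[:, tumor_mask.bool()]                        # activations at tumor voxels only
    return torch.cat([vals.mean(1), vals.std(1)]).numpy()    # per-channel mean and std

# Synthetic cohort: 4 "MRI sequences" per patient, a cubic tumor mask, alternating labels.
feats, labels = [], []
for i in range(40):
    vol = torch.randn(4, 24, 24, 24)
    mask = torch.zeros(24, 24, 24)
    mask[8:16, 8:16, 8:16] = 1
    feats.append(deep_radiomic_features(vol, mask))
    labels.append(i % 2)                                     # toy short/long survival label

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(feats[:30], labels[:30])
print("AUC:", roc_auc_score(labels[30:], clf.predict_proba(feats[30:])[:, 1]))
```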
Non-linear state-space models, also known as general hidden Markov models, are ubiquitous in statistical machine learning, being the most classical generative models for serial data and sequences in general. The particle-based, rapid incremental smoother PaRIS is a sequential Monte Carlo (SMC) technique allowing for efficient online approximation of expectations of additive functionals under the smoothing distribution in these models. Such expectations appear naturally in several learning contexts, such as maximum-likelihood estimation (MLE) and Markov score climbing (MSC). PaRIS has linear computational complexity, limited memory requirements and comes with non-asymptotic bounds, convergence results and stability guarantees. Still, being based on self-normalised importance sampling, the PaRIS estimator is biased. Our first contribution is to design a novel additive smoothing algorithm, the Parisian particle Gibbs (PPG) sampler, which can be viewed as a PaRIS algorithm driven by conditional SMC moves, resulting in bias-reduced estimates of the targeted quantities. We substantiate the PPG algorithm with theoretical results, including new bounds on bias and variance as well as deviation inequalities. Our second contribution is to apply PPG in a learning framework, covering MLE and MSC as special examples. In this context, we establish, under standard assumptions, non-asymptotic bounds highlighting the value of bias reduction and the implicit Rao--Blackwellization of PPG. These are the first non-asymptotic results of this kind in this setting. We illustrate our theoretical results with numerical experiments supporting our claims.
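For intuition, the following is a toy sketch of the basic PaRIS online-smoothing update (a bootstrap particle filter with backward sampling of the auxiliary statistics) for an additive functional in a linear-Gaussian model; the conditional-SMC moves that define the PPG sampler and its bias-reduction machinery are not reproduced, and the model and tuning constants are arbitrary assumptions.

```python
# Toy sketch of the basic PaRIS update (no conditional-SMC / PPG layer): online smoothing
# of the additive functional h = sum_t x_t in an assumed linear-Gaussian model.
import numpy as np

rng = np.random.default_rng(0)
phi, sig_x, sig_y = 0.9, 1.0, 1.0          # assumed model: x_t = phi*x_{t-1} + N(0, sig_x^2)
T, N, Ntilde = 100, 500, 2                 #                y_t = x_t + N(0, sig_y^2)

x = np.zeros(T)
y = np.zeros(T)
for t in range(T):
    x[t] = phi * x[t - 1] + sig_x * rng.normal() if t > 0 else sig_x * rng.normal()
    y[t] = x[t] + sig_y * rng.normal()

def log_q(x_prev, x_curr):                 # log transition density (up to a constant)
    return -0.5 * ((x_curr - phi * x_prev) / sig_x) ** 2

part = sig_x * rng.normal(size=N)                       # bootstrap particle filter, t = 0
logw = -0.5 * ((y[0] - part) / sig_y) ** 2
tau = part.copy()                                       # PaRIS statistic tau_0^i = x_0^i
for t in range(1, T):
    w = np.exp(logw - logw.max()); w /= w.sum()
    anc = rng.choice(N, size=N, p=w)                    # multinomial resampling
    new = phi * part[anc] + sig_x * rng.normal(size=N)  # mutation
    new_tau = np.empty(N)
    for i in range(N):
        # Backward kernel: proportional to w_{t-1}^j * q(x_{t-1}^j, x_t^i).
        lb = np.log(w + 1e-300) + log_q(part, new[i])
        b = np.exp(lb - lb.max()); b /= b.sum()
        J = rng.choice(N, size=Ntilde, p=b)             # Ntilde backward draws
        new_tau[i] = np.mean(tau[J] + new[i])           # tau_t^i update for h_t = x_t
    part, tau = new, new_tau
    logw = -0.5 * ((y[t] - part) / sig_y) ** 2

w = np.exp(logw - logw.max()); w /= w.sum()
print("PaRIS estimate of E[sum_t x_t | y_{0:T-1}]:", w @ tau)
```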
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely oversee the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
This paper presents our solutions for the MediaEval 2022 task on DisasterMM. The task is composed of two subtasks, namely (i) Relevance Classification of Twitter Posts (RCTP) and (ii) Location Extraction from Twitter Texts (LETT). The RCTP subtask aims at differentiating flood-related from non-relevant social posts, while LETT is a Named Entity Recognition (NER) task and aims at the extraction of location information from the text. For RCTP, we proposed four different solutions based on BERT, RoBERTa, DistilBERT, and ALBERT, obtaining F1-scores of 0.7934, 0.7970, 0.7613, and 0.7924, respectively. For LETT, we used three models, namely BERT, RoBERTa, and DistilBERT, obtaining F1-scores of 0.6256, 0.6744, and 0.6723, respectively.
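A minimal sketch of an RCTP-style fine-tuning run with a pretrained transformer; the checkpoint, data file, column names, and hyperparameters are placeholders rather than the authors' configuration:

```python
# Hypothetical sketch: fine-tuning a pretrained transformer for binary tweet relevance.
# Checkpoint, file names and hyperparameters are placeholders, not the paper's setup.
import pandas as pd
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

df = pd.read_csv("rctp_train.csv")                    # assumed columns: "text", "label"
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = Dataset.from_pandas(df).map(
    lambda b: tok(b["text"], truncation=True, padding="max_length", max_length=128),
    batched=True)
ds = ds.train_test_split(test_size=0.1, seed=42)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
args = TrainingArguments(output_dir="rctp_bert", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args,
        train_dataset=ds["train"], eval_dataset=ds["test"]).train()
```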
In recent years, social media has been widely explored as a potential source of communication and information in disasters and emergency situations. Several interesting works and case studies of disaster analytics, exploring different aspects of natural disasters, have already been conducted. Along with this great potential, disaster analytics comes with several challenges, mainly due to the nature of social media content. In this paper, we explore one such challenge and propose a text classification framework to deal with noisy Twitter data. More specifically, we employed several transformers, both individually and in combination, to differentiate between relevant and non-relevant Twitter posts, achieving a highest F1-score of 0.87.
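One simple way to combine several fine-tuned transformers is late fusion by averaging their predicted class probabilities; the sketch below assumes locally saved fine-tuned checkpoints with placeholder names and is not necessarily the fusion scheme used in the paper:

```python
# Illustrative late fusion: average the softmax outputs of several fine-tuned classifiers.
# The checkpoint paths are placeholders; the models are assumed to share label ordering.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoints = ["rctp_bert", "rctp_roberta", "rctp_distilbert"]   # assumed local checkpoints

def ensemble_predict(texts):
    probs = []
    for ckpt in checkpoints:
        tok = AutoTokenizer.from_pretrained(ckpt)
        model = AutoModelForSequenceClassification.from_pretrained(ckpt).eval()
        enc = tok(texts, truncation=True, padding=True, return_tensors="pt")
        with torch.no_grad():
            probs.append(model(**enc).logits.softmax(-1))
    # Average class probabilities across models, then take the most likely class.
    return torch.stack(probs).mean(dim=0).argmax(dim=-1)

print(ensemble_predict(["Severe flooding reported downtown", "Great pizza tonight!"]))
```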
Current neural network models of dialogue generation (chatbots) show great promise for generating answers for chatty agents, but they are short-sighted in that they predict utterances one at a time while disregarding their impact on future outcomes. Modelling a dialogue's future direction is critical for generating coherent, interesting dialogues, a need that has led traditional NLP dialogue models to rely on reinforcement learning. In this article, we explain how to combine these objectives by using deep reinforcement learning to predict future rewards in chatbot dialogue. The model simulates conversations between two virtual agents, with policy gradient methods used to reward sequences that exhibit three useful conversational characteristics: information flow, coherence, and simplicity of response (related to the forward-looking function). We assess our model based on its diversity, length, and complexity in comparison with humans. In dialogue simulation, evaluations demonstrate that the proposed model generates more interactive responses and encourages a more sustained, successful conversation. This work marks a preliminary step toward developing a neural conversational model based on the long-term success of dialogues.
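The policy-gradient mechanism can be illustrated with a toy REINFORCE update; the "policy" below is a bare table of per-position logits and the reward is a stand-in for the three conversational reward terms, so this is a sketch of the training signal rather than the paper's seq2seq model:

```python
# Toy REINFORCE update: per-position logits play the role of the seq2seq policy, and a
# diversity score stands in for the paper's conversational reward terms.
import torch
from torch.distributions import Categorical

vocab_size, max_len = 50, 8
logits = torch.nn.Parameter(torch.zeros(max_len, vocab_size))   # stand-in policy parameters
opt = torch.optim.Adam([logits], lr=0.1)

def reward(tokens):
    # Placeholder reward: fraction of distinct tokens in the sampled "utterance".
    return len(set(tokens.tolist())) / max_len

baseline = 0.0
for step in range(200):
    dist = Categorical(logits=logits)        # one categorical distribution per position
    tokens = dist.sample()                   # sample an utterance of max_len token ids
    logp = dist.log_prob(tokens).sum()
    r = reward(tokens)
    baseline = 0.9 * baseline + 0.1 * r      # running baseline reduces gradient variance
    loss = -(r - baseline) * logp            # REINFORCE / policy-gradient objective
    opt.zero_grad()
    loss.backward()
    opt.step()

print("reward of last sampled utterance:", r)
```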
Three-dimensional (3D) technologies have been developing rapidly in recent years and have influenced industrial, medical, cultural, and many other fields. In this paper, we introduce an automatic 3D human head scanning-printing system, which provides a complete pipeline to scan, reconstruct, select, and finally print out physical 3D human heads. To enhance the accuracy of our system, we developed a consumer-grade composite sensor (including a gyroscope, an accelerometer, a digital compass, and a Kinect v2 depth sensor) as our sensing device. This sensing device is then mounted on a robot, which automatically rotates around the human subject with an approximately 1-meter radius to capture the full-view information. The data streams are further processed and fused into a 3D model of the subject using a tablet located on the robot. In addition, an automatic selection method, based on our specific system configurations, is proposed to select the head portion. We evaluated the accuracy of the proposed system by comparing our generated 3D head models, from both a standard human head model and real human subjects, with the ones reconstructed from FastSCAN and Cyberware commercial laser scanning systems, by computing and visualizing Hausdorff distances. Computational cost is also reported to further assess our proposed system.
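The evaluation metric is easy to reproduce; the sketch below computes a symmetric Hausdorff distance between two vertex sets, with random point clouds standing in for the reconstructed and reference head models:

```python
# Minimal sketch of the evaluation metric: symmetric Hausdorff distance between two
# (n, 3) vertex arrays; random clouds stand in for the actual mesh vertices.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def symmetric_hausdorff(a, b):
    """Symmetric Hausdorff distance between two (n, 3) point sets."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# In practice `recon` and `reference` would hold vertices of the scanned head model and
# of the laser-scanned reference (e.g. FastSCAN / Cyberware).
rng = np.random.default_rng(0)
recon = rng.normal(size=(5000, 3))
reference = recon + 0.01 * rng.normal(size=(5000, 3))   # slightly perturbed copy

print(f"symmetric Hausdorff distance: {symmetric_hausdorff(recon, reference):.4f}")
```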
We propose a 6D RGB-D odometry approach that finds the relative camera pose between consecutive RGB-D frames by keypoint extraction and feature matching on both the RGB and depth image planes. Furthermore, we feed the estimated pose to the highly accurate KinectFusion algorithm, which uses a fast ICP (Iterative Closest Point) to fine-tune the frame-to-frame relative pose and fuse the depth data into a global implicit surface. We evaluate our method on a publicly available RGB-D SLAM benchmark dataset by Sturm et al. The experimental results show that our proposed reconstruction method, based solely on visual odometry and KinectFusion, outperforms the state-of-the-art RGB-D SLAM system in accuracy. Moreover, our algorithm outputs a ready-to-use polygon mesh (highly suitable for creating 3D virtual worlds) without any post-processing steps.
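A rough sketch of the keypoint-based relative-pose step on the RGB images, using OpenCV ORB matching and essential-matrix decomposition; the depth-plane keypoints, metric scale recovery, and KinectFusion ICP refinement of the full pipeline are omitted, and the intrinsics and file names are assumptions:

```python
# Illustrative frame-to-frame RGB pose estimation via ORB matching and essential-matrix
# decomposition; depth-image keypoints and KinectFusion refinement are not shown.
import cv2
import numpy as np

K = np.array([[525.0, 0, 319.5],           # assumed nominal Kinect intrinsics for the
              [0, 525.0, 239.5],           # Freiburg RGB-D benchmark camera
              [0, 0, 1.0]])

def relative_pose(img1, img2):
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
    return R, t                            # rotation and (unit-scale) translation

frame1 = cv2.imread("rgb_0001.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
frame2 = cv2.imread("rgb_0002.png", cv2.IMREAD_GRAYSCALE)
R, t = relative_pose(frame1, frame2)
print("R =\n", R, "\nt =", t.ravel())
```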